
    PSO-FNN-Based Vertical Handoff Decision Algorithm in Heterogeneous Wireless Networks

    To address the problem that fuzzy-logic- and neural-network-based vertical handoff algorithms in heterogeneous wireless networks do not account for the load state reasonably, a PSO-FNN-based vertical handoff decision algorithm is proposed. The algorithm performs reinforcement learning of the fuzzy neural network (FNN) factors with the objective of equal blocking probability, so that it adapts to the load state dynamically, and combines it with the particle swarm optimization (PSO) algorithm, whose global optimization capability is used to set the initial parameters and improve the precision of parameter learning. Simulation results show that, compared with the sum-received signal strength (S-RSS) algorithm, the PSO-FNN algorithm can balance the load of heterogeneous wireless networks effectively and decrease the blocking probability as well as the handoff call blocking probability.
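The abstract describes PSO being used to choose good initial parameters before the FNN's own learning refines them. A minimal sketch of that first step, with a simple quadratic objective standing in for the (unspecified) blocking-probability objective, might look like:

```python
import random

def pso_minimize(fitness, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Basic particle swarm optimization: returns the best parameter
    vector found and its fitness value."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical stand-in objective: distance of the FNN parameter vector
# from a known optimum (the real objective would be blocking probability).
random.seed(0)
target = [0.3, -0.5, 0.8]
best, best_f = pso_minimize(
    lambda p: sum((x - t) ** 2 for x, t in zip(p, target)), dim=3)
```

The returned `best` vector would then seed the FNN before its parameter learning begins; `target` and the fitness function here are illustrative placeholders, not anything from the paper.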

    PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection

    3D object detection is receiving increasing attention from both industry and academia thanks to its wide applications in various fields. In this paper, we propose Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for 3D object detection on point clouds. First, we propose a novel 3D detector, PV-RCNN, which boosts 3D detection performance by deeply integrating the feature learning of both point-based set abstraction and voxel-based sparse convolution through two novel steps, i.e., voxel-to-keypoint scene encoding and keypoint-to-grid RoI feature abstraction. Second, we propose an advanced framework, PV-RCNN++, for more efficient and accurate 3D object detection. It consists of two major improvements: sectorized proposal-centric sampling for efficiently producing more representative keypoints, and VectorPool aggregation for better aggregating local point features with much less resource consumption. With these two strategies, our PV-RCNN++ is about 3× faster than PV-RCNN, while also achieving better performance. The experiments demonstrate that our proposed PV-RCNN++ framework achieves state-of-the-art 3D detection performance on the large-scale and highly competitive Waymo Open Dataset with 10 FPS inference speed on a detection range of 150m × 150m. (Accepted by the International Journal of Computer Vision (IJCV); code is available at https://github.com/open-mmlab/OpenPCDet.)
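The "sectorized" part of the keypoint sampling can be illustrated with a simplified sketch: partition the point cloud into azimuth sectors around the sensor and sample within each sector, so keypoints cover the whole scene rather than clustering in dense regions. (PV-RCNN++ additionally restricts sampling to points near proposals and uses farthest-point sampling per sector; random per-sector sampling below is a simplification.)

```python
import numpy as np

def sectorized_sampling(points, n_sectors=6, samples_per_sector=4):
    """Split points (N, 3) into azimuth sectors around the origin and
    sample a fixed budget within each sector."""
    angles = np.arctan2(points[:, 1], points[:, 0])            # azimuth in [-pi, pi]
    sector_id = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    keypoints = []
    for s in range(n_sectors):
        idx = np.flatnonzero(sector_id == s)
        if idx.size == 0:
            continue
        pick = np.random.default_rng(s).choice(
            idx, size=min(samples_per_sector, idx.size), replace=False)
        keypoints.append(points[pick])
    return np.concatenate(keypoints, axis=0)

rng = np.random.default_rng(0)
cloud = rng.uniform(-50, 50, size=(1000, 3))   # synthetic point cloud
kp = sectorized_sampling(cloud)
```

Because every sector gets its own sampling budget, sparse far-away regions are represented among the keypoints instead of being drowned out by dense near-sensor returns.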

    Sparse Dense Fusion for 3D Object Detection

    With the prevalence of multimodal learning, camera-LiDAR fusion has gained popularity in 3D object detection. Although multiple fusion approaches have been proposed, they can be classified as either sparse-only or dense-only based on the feature representation in the fusion module. In this paper, we analyze them in a common taxonomy and thereafter observe two challenges: 1) sparse-only solutions preserve the 3D geometric prior and yet lose rich semantic information from the camera, and 2) dense-only alternatives retain semantic continuity but miss the accurate geometric information from LiDAR. By analyzing these two formulations, we conclude that the information loss is inevitable due to their design scheme. To compensate for the information loss in either manner, we propose Sparse Dense Fusion (SDF), a complementary framework that incorporates both sparse-fusion and dense-fusion modules via the Transformer architecture. Such a simple yet effective sparse-dense fusion structure enriches semantic texture and exploits spatial structure information simultaneously. Through our SDF strategy, we assemble two popular methods with moderate performance and outperform the baseline by 4.3% in mAP and 2.5% in NDS, ranking first on the nuScenes benchmark. Extensive ablations demonstrate the effectiveness of our method and empirically align with our analysis.
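The complementary idea, letting a sparse (LiDAR) token stream and a dense (camera) token stream enrich each other through attention before both are kept, can be sketched in a few lines. This is a minimal single-head cross-attention illustration, not the paper's actual module design:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention (single head)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def sparse_dense_fuse(sparse_feats, dense_feats):
    """Let each sparse (LiDAR) token attend to the dense (camera) tokens
    and vice versa, then keep both enriched streams."""
    s2d = attention(sparse_feats, dense_feats, dense_feats)  # geometry gains semantics
    d2s = attention(dense_feats, sparse_feats, sparse_feats) # semantics gains geometry
    return np.concatenate([sparse_feats + s2d, dense_feats + d2s], axis=0)

rng = np.random.default_rng(0)
fused = sparse_dense_fuse(rng.normal(size=(5, 16)),   # 5 sparse LiDAR tokens
                          rng.normal(size=(8, 16)))   # 8 dense camera tokens
```

Retaining both streams, rather than collapsing to one representation, is what lets the framework avoid the information loss of a sparse-only or dense-only design.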

    MPPNet: Multi-Frame Feature Intertwining with Proxy Points for 3D Temporal Object Detection

    Accurate and reliable 3D detection is vital for many applications, including autonomous driving vehicles and service robots. In this paper, we present a flexible and high-performance 3D detection framework, named MPPNet, for 3D temporal object detection with point cloud sequences. We propose a novel three-hierarchy framework with proxy points for multi-frame feature encoding and interaction to achieve better detection. The three hierarchies conduct per-frame feature encoding, short-clip feature fusion, and whole-sequence feature aggregation, respectively. To enable processing long point cloud sequences with reasonable computational resources, intra-group feature mixing and inter-group feature attention are proposed to form the second and third feature encoding hierarchies, which are recurrently applied for aggregating multi-frame trajectory features. The proxy points not only act as consistent object representations for each frame but also serve as couriers to facilitate feature interaction between frames. Experiments on the large-scale Waymo Open Dataset show that our approach outperforms state-of-the-art methods by large margins when applied to both short (e.g., 4-frame) and long (e.g., 16-frame) point cloud sequences. Code is available at https://github.com/open-mmlab/OpenPCDet. (Accepted by ECCV 2022.)
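The three-hierarchy structure (per-frame encoding, short-clip fusion, whole-sequence aggregation) can be sketched with per-frame features as the input. Mean pooling below is a placeholder for the paper's intra-group feature mixing and inter-group attention; only the grouping structure is the point of the sketch:

```python
import numpy as np

def three_hierarchy_aggregate(frame_feats, clip_len=4):
    """Hierarchy 1: per-frame features (given as input).
    Hierarchy 2: fuse frames within each short clip.
    Hierarchy 3: aggregate clip features over the whole sequence."""
    n_frames, dim = frame_feats.shape
    assert n_frames % clip_len == 0, "sequence must divide into whole clips"
    clips = frame_feats.reshape(n_frames // clip_len, clip_len, dim)
    clip_feats = clips.mean(axis=1)        # intra-group mixing (simplified to a mean)
    seq_feat = clip_feats.mean(axis=0)     # inter-group aggregation (simplified)
    return clip_feats, seq_feat

# A 16-frame sequence of 8-dim per-frame features, grouped into 4-frame clips.
feats = np.arange(16 * 8, dtype=float).reshape(16, 8)
clip_feats, seq_feat = three_hierarchy_aggregate(feats)
```

Processing the sequence clip-by-clip is what keeps the cost manageable for long (e.g., 16-frame) inputs: each frame only interacts densely with its own clip, and clips interact through the coarser third hierarchy.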

    UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation

    Jointly processing information from multiple sensors is crucial to achieving accurate and robust perception for reliable autonomous driving systems. However, current 3D perception research follows a modality-specific paradigm, leading to additional computation overhead and inefficient collaboration between different sensor data. In this paper, we present an efficient multi-modal backbone for outdoor 3D perception, named UniTR, which processes a variety of modalities with unified modeling and shared parameters. Unlike previous works, UniTR introduces a modality-agnostic transformer encoder to handle these view-discrepant sensor data for parallel modal-wise representation learning and automatic cross-modal interaction without additional fusion steps. More importantly, to make full use of these complementary sensor types, we present a novel multi-modal integration strategy that considers both semantic-abundant 2D perspective and geometry-aware 3D sparse neighborhood relations. UniTR is also a fundamentally task-agnostic backbone that naturally supports different 3D perception tasks. It sets a new state-of-the-art performance on the nuScenes benchmark, achieving +1.1 NDS for 3D object detection and +12.0 mIoU for BEV map segmentation with lower inference latency. Code will be available at https://github.com/Haiyang-W/UniTR. (Accepted by ICCV 2023.)
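The core "shared parameters" idea, one encoder whose weights are applied to tokens from every sensor, can be illustrated with a single self-attention layer over the concatenation of camera and LiDAR tokens. This is a bare numpy sketch of the weight-sharing pattern, not UniTR's actual encoder (which adds modality-specific tokenizers and structured partitioning):

```python
import numpy as np

def shared_encoder_layer(tokens, w_qkv, w_out):
    """One modality-agnostic self-attention layer: the same weights are
    applied to every token regardless of which sensor produced it, so
    cross-modal interaction happens inside ordinary self-attention."""
    d = tokens.shape[-1]
    q, k, v = np.split(tokens @ w_qkv, 3, axis=-1)
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return tokens + (w @ v) @ w_out          # residual connection

rng = np.random.default_rng(0)
d = 16
cam_tokens = rng.normal(size=(10, d))        # camera patch tokens
lidar_tokens = rng.normal(size=(6, d))       # LiDAR voxel tokens
w_qkv = rng.normal(size=(d, 3 * d)) * 0.1    # shared across modalities
w_out = rng.normal(size=(d, d)) * 0.1
out = shared_encoder_layer(np.concatenate([cam_tokens, lidar_tokens]), w_qkv, w_out)
```

Because both modalities pass through the same `w_qkv`/`w_out`, no separate fusion module is needed: camera and LiDAR tokens exchange information whenever they attend to each other.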

    Advanced Geological Prediction

    Due to the particular nature of tunnel projects, it is difficult to determine the exact geological conditions along the tunnel body during the survey stage. Once construction encounters unfavorable geological bodies such as faults, fracture zones, and karst, great challenges arise, easily causing major problems, economic losses, and casualties. It is therefore necessary to carry out geological prediction work during tunnel construction, which is of great significance for safe construction and for avoiding major disasters and accident losses. This lecture mainly introduces the commonly used methods of geological prediction in tunnel construction and the design principles and contents of geological prediction, and combines typical cases to show the implementation process of comprehensive geological prediction. Finally, prospects for the development of geological prediction theory, methods, and technology are discussed, providing a useful reference for promoting the development of geological prediction for tunnels.